
    GAN-CAN: A Novel Attack to Behavior-Based Driver Authentication Systems

For many years, car keys have been the sole means of authentication in vehicles. Whether the access control process is physical or wireless, entrusting the ownership of a vehicle to a single token makes it prone to theft. Modern vehicles equipped with Controller Area Network (CAN) bus technology collect a wealth of sensor data in real time, covering aspects of the vehicle, its environment, and the driver. This data can be processed and analyzed to gain valuable insights for human behavior analysis. For this reason, many researchers have started developing behavior-based authentication systems. Many Machine Learning (ML) and Deep Learning (DL) models have been explored for behavior-based driver authentication, but security has not been a primary focus in the design of these systems. By collecting data in a moving vehicle, DL models can recognize patterns in the data and identify drivers based on their driving behavior. This can serve as an anti-theft system, as a thief would exhibit a different driving style from the vehicle owner. However, the assumption that an attacker cannot replicate the legitimate driver's behavior fails under certain conditions. In this thesis, we propose GAN-CAN, the first attack capable of fooling state-of-the-art behavior-based driver authentication systems in a vehicle. Based on the adversary's knowledge, we propose different GAN-CAN implementations. Our attack leverages the lack of security in the CAN bus to inject suitably designed time-series data that mimics the legitimate driver. Our malicious time-series data is generated by integrating a modified reinforcement learning technique with Generative Adversarial Networks (GANs) trained through an adapted process. Furthermore, we conduct a thorough investigation into the safety implications of the injected values throughout the attack.
This study guarantees that the introduced values do not undermine the safety of the vehicle or the individuals inside it. We also formalize a real-world implementation of a driver authentication system, considering possible vulnerabilities and exploits. We tested GAN-CAN on an improved version of the most efficient behavior-based driver authentication model in the literature and prove that our attack can fool it with a success rate of up to 99%. We show how an attacker, without prior knowledge of the authentication system, can steal a car by deploying GAN-CAN on an off-the-shelf system in under 22 minutes. Moreover, by accounting for the safety relevance of the injected values, we demonstrate that GAN-CAN can deceive the authentication system without compromising the overall safety of the vehicle. This highlights the urgent need to address the security vulnerabilities present in behavior-based driver authentication systems. Finally, we suggest some possible countermeasures to the GAN-CAN attack.
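The injection step the abstract describes rests on a well-known property of classic CAN: frames carry no sender authentication, so any node on the bus can emit a frame with an arbitrary identifier and payload. As a rough illustration only (the signal layout below is hypothetical, not taken from the thesis), generated time-series values would be packed into fixed 8-byte CAN 2.0 payloads before being injected:

```python
import struct

def encode_speed_frame(speed_kmh: float) -> bytes:
    """Pack one speed sample into a hypothetical 8-byte CAN payload.

    Assumed (illustrative) layout: speed as an unsigned 16-bit value,
    0.01 km/h per bit, big-endian, in the first two bytes; the
    remaining six bytes are zero padding.
    """
    raw = round(speed_kmh / 0.01)
    return struct.pack(">H6x", raw)  # 2 data bytes + 6 pad = 8 bytes

# A short generated time series of speed values (stand-in for the
# attack's GAN output), encoded frame by frame for injection.
series = [42.0, 42.5, 43.1]
frames = [encode_speed_frame(s) for s in series]
assert all(len(f) == 8 for f in frames)
```

In a real deployment the attacker would hand each payload to a bus interface (e.g., a SocketCAN adapter) at the signal's normal transmission rate; the point of the sketch is only that nothing on the bus verifies who produced the frame.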